Patent abstract:
A data processing system comprises a cluster of devices (16) interconnected for the communication of data in streams, particularly digital audio and/or video data. One of the devices (10) is a source device for at least two data streams to be sent to one or more other devices (12, 14) as destination devices of the cluster. To enable synchronization of the stream presentations by the destination devices, some or all of the devices (10, 12, 14) carry respective tables (11, 13, 15) identifying, for that device, an identifier for each type of data stream that the device can process together with the processing delay for that stream. The or each such table is accessible via the cluster connection (18) to whichever of the devices, at source, destination or in between for the signal, is handling application of the necessary offsets.
Publication number: US20010008531A1
Application number: US09/759,184
Filing date: 2001-01-12
Publication date: 2001-07-19
Inventors: Peter Lanigan; Nicoll Shepherd
Applicant: US Philips Corp
Primary IPC class: H04L12-2805
Patent description:
[0001] The present invention relates to systems composed of a plurality of devices clustered for the exchange of audio and/or video data and control messages via wired or wireless link and, in particular although not essentially, to such systems where different data components from a source device are to be routed to respective and separate other devices of the system. The invention further relates to devices for use in such systems.
[0002] Networking or interconnection of devices has long been known and used, starting from basic systems where different system functions have been provided by separate units, for example hi-fi or so-called home cinema systems. A development has been the so-called home bus systems, where a greater variety of products have been linked with a view to providing enhanced overall functionality in, for example, domestic audio/video apparatus coupled with a home security system and the use of a telephone. An example of such a home bus system is the domestic digital bus (D2B), the communications protocols for which have been issued as standard IEC 1030 by the International Electrotechnical Commission in Geneva, Switzerland. The D2B system provides a single-wire control bus to which all devices are interfaced, with messages carried between the various devices of the system in a standardised form of data packet.
[0003] A particular problem that can occur with distributed systems such as hi-fi and home cinema is loss of synchronisation between different components required to be presented to a user simultaneously, in particular for video images and an accompanying soundtrack, or between different components of the soundtrack, where the different components are to be handled by different devices—for example in a home cinema set-up. This loss of synchronisation may occur due to differences in the effective lengths of the transmission paths for the differing components resulting in, or due to, different latencies in decoders or intermediate processing stages for the different components.
[0004] One way to approach the synchronisation problem, where all the components are decoded within a single device, is described in U.S. Pat. No. 5,430,485 (Lankford et al) which describes a receiver for decoding associated compressed video and audio information components transmitted in mutually exclusive frames of data, each with a respective presentation time stamp. A coarse synchronisation is applied by selectively dropping frames of one or other of the components and then fine tuning by adjusting the audio stream clock frequency.
[0005] Another approach, this time closer to the source for different components being sent out, is described in U.S. Pat. No. 5,594,660 (Sung et al) which provides an audio/video decoder/decompressor for receiving and separating the components of an encoded and compressed data stream. Within the decoder/decompressor, Sung has means for breaking up a compound AV stream and then applying an appropriate temporal offset to each stream to achieve synchronisation of the outputs during playback. The differential buffering by FIFO units follows the system decoder but precedes the decoding of the audio or of the video.
[0006] Although handling the component delays with the components still encoded generally involves less processing, handling of synchronisation (particularly if done at source) can create its own problems when it comes to determining just how much delay is to be applied to each component stream.
[0007] It is accordingly an object of the present invention to provide a networked system of devices including enabling means for synchronising components intended to be presented synchronously to a user of the system.
[0008] In accordance with the present invention there is provided a data processing system comprising a cluster of devices interconnected for the communication of data in streams wherein, for at least two data streams to be sent to one or more devices as destination devices of said cluster, at least one device of the cluster comprises buffering means arranged to apply a respective delay to at least one of said at least two data streams in an amount determined by differing signal path latencies for said at least two streams; wherein at least some devices of the cluster maintain a respective table, readable via said interconnection by other devices of said cluster, each such table identifying one or more latencies for the respective device, with the means arranged to apply delays doing so on the basis of the table contents. By the use of respective tables, which are suitably (but not essentially) carried by all destination devices, the determination of what delay to apply to each data stream may be made more simply, and greater flexibility is introduced to the system in that changes to processing arrangements may require only a table entry to be altered, rather than a large-scale revision of the recorded operational parameters typically held by networked devices.
[0009] Each table may identify, for its respective device, signal processing capabilities for that device, together with the latency associated with each such capability. Where one of the devices is a source device for said at least two data streams to be sent to said destination devices of said cluster, said source device may include the means arranged to apply a delay together with means arranged to read data from said respective tables of the destination devices and determine the respective delay to apply to at least one of said at least two data streams. In such an arrangement, the source device may further comprise multiplexing means coupled with the means arranged to apply a delay and arranged to combine said at least two streams into a single data stream for transmission to said destination devices.
[0010] Whilst simple figures for the respective delays may be held in each table, one or more table entries may be in the form of an algorithm requiring data from the device reading the table to enable determination of the latency of the device holding said table. For this, the determination on the basis of the algorithm may be implemented by the device reading the table, said device having downloaded the algorithm from the device holding the table; alternatively, the determination may be implemented by the device holding the table, with the results of the implementation being transmitted via said interconnection to the device reading the table.
[0011] The means arranged to apply a delay may suitably comprise buffering means (i.e. a memory device with controls over the rates of writing to, and reading from, such a device). Alternatively, the means arranged to apply a delay may comprise means arranged to selectively apply a delay to reading of one or each of said data streams from a source thereof. In this latter option, the delay means may be implemented by selective control over the reading of the data streams from (for example) disc.
[0012] The present invention also provides a data processing apparatus comprising the technical features of a source device in a system as recited hereinabove and as claimed in the claims attached hereto, to which the reader's attention is now directed.
[0013] Further features and advantages of the present invention will become apparent from a reading of the description of preferred embodiments of the invention, given by way of example only and with reference to the accompanying drawings, in which:
[0014] FIG. 1 represents an arrangement of three interconnected devices forming an audio/video cluster;
[0015] FIG. 2 represents a table of latency information as carried by one of the devices in the cluster of FIG. 1;
[0016] FIG. 3 represents a configuration of source device suitable to embody the present invention; and
[0017] FIG. 4 represents an alternative (wireless) interconnected cluster suitable to embody the present invention.
[0018] A first arrangement of interconnected devices is shown in FIG. 1, with three devices 10, 12, 14 forming a cluster 16 based around a respective bus 18 supporting communication in accordance with IEEE Standard 1394 connection and communications protocols. In the following example, reference is made to IEEE 1394, and the disclosure of the specification of this protocol is incorporated herein by reference. As will be recognised by the skilled reader, however, conformance with this protocol is not essential to the operation of the present invention.
[0019] The devices in the cluster 16 comprise a source device 10 coupled via bus 18 to a pair of presentation devices, in this example a television 12 for showing the image component of a combined AV stream from the source, and an audio processor and playback system 14 for reproducing the audio component of the AV stream.
[0020] In order to synchronise the presentation to a user of the audio and video components, a device on the network must arrange for some stream components (in this example the audio component) to be delayed relative to the others (in this case video). In the FIG. 1 example, if a data stream from the source 10 to the two destination devices 12, 14 consists of MPEG2 video and AC3 audio, where the processing delay for MPEG2 in the television 12 is 1.0 second, and the processing delay for AC3 audio in the audio presentation device 14 is 0.1 seconds, the audio signal must be delayed by (1.0−0.1)=0.9 seconds at some point along its signal path to achieve synchronisation. One technique for applying this delay is described in our co-pending commonly-assigned application entitled “Interconnection of Audio/Video Devices” and will be briefly described hereinafter with respect to FIG. 3. In order to be able to arrange for these delays, the system must have some means for determining the processing delays for the various types of data stream.
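The required offset follows directly from two such table entries; as a minimal illustration of that arithmetic (the class and method names are invented for the example and do not appear in the specification):

```java
/**
 * Minimal sketch (illustrative names only): compute the delay to apply to the
 * stream whose destination has the shorter processing latency, so that both
 * destination devices present their components at the same time.
 */
public final class OffsetCalculator {

    /** Delay, in seconds, to apply to the faster path. */
    public static double requiredOffset(double slowerPathLatencySeconds,
                                        double fasterPathLatencySeconds) {
        // FIG. 1 example: 1.0 s (MPEG2 video in the television) minus
        // 0.1 s (AC3 audio in the audio device) gives a 0.9 s audio delay.
        return slowerPathLatencySeconds - fasterPathLatencySeconds;
    }

    public static void main(String[] args) {
        System.out.println(requiredOffset(1.0, 0.1) + " s"); // prints "0.9 s"
    }
}
```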
[0021] In order to enable application of the appropriate delay to counter latency in a matching stream, particularly for those devices supporting more than one processing capability (e.g. MPEG2 and DV; AC3 and MP3), each device in the cluster 10, 12, 14 is provided with a respective internal or remotely stored look-up table 11, 13, 15. In this table (an example of which is given in FIG. 2 for the television 12 from FIG. 1) there will be one entry for each type of stream that a device can process. The entry will consist of, at least, an identifier for the type of stream, and a processing delay for that stream. Other information about the stream may be contained in the table, as required.
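Assuming nothing beyond what the text requires of an entry (a stream-type identifier plus a processing delay), one hypothetical in-memory form for such a look-up table might be the following; the class and method names are assumptions made for illustration:

```java
import java.util.LinkedHashMap;
import java.util.Map;

/** Hypothetical latency look-up table for one device, keyed by stream type. */
public class LatencyTable {

    // One entry per stream type the device can process: identifier -> delay (s).
    private final Map<String, Double> delayByStreamType = new LinkedHashMap<>();

    public void addEntry(String streamTypeId, double processingDelaySeconds) {
        delayByStreamType.put(streamTypeId, processingDelaySeconds);
    }

    /** Delay for the given stream type, or null if the device cannot process it. */
    public Double delayFor(String streamTypeId) {
        return delayByStreamType.get(streamTypeId);
    }

    public static void main(String[] args) {
        // Entries matching the FIG. 1 example.
        LatencyTable television = new LatencyTable();
        television.addEntry("MPEG2", 1.0);

        LatencyTable audioDevice = new LatencyTable();
        audioDevice.addEntry("AC3", 0.1);

        System.out.println(television.delayFor("MPEG2")); // 1.0
        System.out.println(audioDevice.delayFor("AC3"));  // 0.1
    }
}
```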
[0022] In certain circumstances the system may support changes to the specified delays in response to user input varying one or other of the preset audio parameters. The notification for such changes will generally be in the form of a protocol-supported notification and the extent to which some or all devices of the cluster detect and record the effects of the change against their particular parameters as stored in their respective table will depend on the extent to which they follow the protocol. This also applies to their ability to read updated tables from other devices of the cluster.
[0023] A single type of stream may have different processing delays if, for example, the device has different processing delays for various ranges of bit rates for that type of stream. Also, as shown by the entry for MPEG7 stream types, an entry may consist of an algorithm to determine the delay. For example, if the delay was 0.1 seconds for every megabit per second of incoming data, the formula of (0.1*x) seconds, where x is the number of megabits per second, could be stored. With an algorithm in the table, it is either packaged such as to be available for downloading by a device seeking to determine delays, or the enquiring device may be required to submit parameter values (e.g. x) to the device holding the algorithm in its table, which device would then calculate the delay and return the value to the enquiring device. The table may be accessed by some form of read transaction (e.g. “read” operations conforming to IEEE 1394 protocols), a command protocol (e.g. AV/C), a remote method invocation protocol (e.g. request messages according to the Home Audio/Video interoperability standard—HAVi—based around IEEE1394), various Java™ RMI procedures or some other method.
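As a sketch of the algorithmic-entry case, using the (0.1*x)-seconds formula quoted above and a lambda as a stand-in for whatever packaged form a downloaded algorithm would actually take; the names and the choice of representation are assumptions:

```java
import java.util.function.DoubleUnaryOperator;

/** Illustrative handling of a table entry that is an algorithm rather than a figure. */
public class AlgorithmicEntryDemo {

    public static void main(String[] args) {
        // Option 1: the enquiring device downloads the algorithm and evaluates it
        // itself, supplying the incoming bit rate x in Mbit/s.
        DoubleUnaryOperator downloadedFormula = x -> 0.1 * x; // delay = 0.1 s per Mbit/s
        double delaySeconds = downloadedFormula.applyAsDouble(5.0);
        System.out.println("Locally evaluated delay: " + delaySeconds + " s"); // 0.5 s

        // Option 2 (not shown): the enquiring device submits x to the device holding
        // the table, which evaluates the formula and returns the delay over the
        // cluster interconnection (e.g. via an AV/C command or a HAVi request).
    }
}
```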
[0024] As mentioned above, and described in greater detail in our co-pending application, one possible configuration for the source device 10 comprises an audio stream buffer 20 and a video stream buffer 22 for receiving separate input components from a remote signal source 24. Under the direction of a controlling processor 30, which reads the processing latencies from the tables (not shown) for destination devices 12, 14, the buffers are used to apply a respective delay to at least one of the two data streams to combat the differing processing latencies in the video 12 and audio 14 destination devices. Also under the direction of the processor 30, a multiplexer stage 32 combines the temporally offset audio and video from the respective buffers into a single data stream for transmission via the 1394 bus 18.
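A much-simplified sketch of this step, modelling the buffers' effect as a shift of presentation times on the audio component before the two components are recombined; the frame model and all names are illustrative assumptions, not details taken from the co-pending application:

```java
import java.util.ArrayDeque;
import java.util.Queue;

/**
 * Conceptual sketch: hold back the audio component by the offset derived from
 * the destination tables, so that the multiplexed output carries the two
 * components with the required temporal offset between them.
 */
public class DelayBufferSketch {

    /** A media frame reduced to a presentation time (s) and an opaque payload. */
    record Frame(double presentationTime, String payload) {}

    /** Returns a copy of the queued frames with every presentation time shifted. */
    static Queue<Frame> applyDelay(Queue<Frame> in, double offsetSeconds) {
        Queue<Frame> out = new ArrayDeque<>();
        for (Frame f : in) {
            out.add(new Frame(f.presentationTime() + offsetSeconds, f.payload()));
        }
        return out;
    }

    public static void main(String[] args) {
        Queue<Frame> audio = new ArrayDeque<>();
        audio.add(new Frame(0.000, "AC3 frame 0"));
        audio.add(new Frame(0.032, "AC3 frame 1"));

        // 0.9 s offset from the FIG. 1 example; the delayed frames would then be
        // handed to the multiplexer stage together with the undelayed video.
        applyDelay(audio, 0.9).forEach(
                f -> System.out.println(f.presentationTime() + "  " + f.payload()));
    }
}
```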
[0025] Whilst the signals in the respective buffers 20, 22 may simply be read out and recombined, the source device optionally further comprises data processing means interposed in the signal path between the buffers 20, 22 and the multiplexer 32. As shown, this further data processing means may take the form of an audio signal processor ASP 34 on the output of the audio signal buffer and a video signal processor VSP 36 on the output of the video signal buffer.
[0026] The first and second data streams (audio and video) may be encoded according to a first communications protocol such as MPEG1 or 2, and the destination devices 12, 14 are each provided with a respective decoder 40, 42 operating according to the said protocol.
[0027] From reading the present disclosure, other modifications and variations will be apparent to persons skilled in the art, including equivalents and features which are already known in the field of bus-connected and cordless communication systems and components and which may be used instead of or in addition to features already disclosed herein. For example, as shown by FIG. 4, the source 58 may comprise an optical or magnetic disk reader and, instead of a digital data bus, the data channel from source 60 to destination devices 62, 64, 66 may be a wireless communications link 68 for which each of the destination devices is provided with at least a receiver and the source device is provided with at least a transmitter. The system may comprise many more devices than illustrated herein including, for example, two or more source devices, and some devices of the system may have the technical features of both source and destination (for example a video cassette record and playback deck) with the appropriate source/destination behaviour being selected in dependence on the context.
[0028] In the foregoing we have described a data processing system that comprises a cluster of devices interconnected for the communication of data in streams, particularly digital audio and/or video data. One of the devices is a source device for at least two data streams to be sent to one or more other devices as destination devices of the cluster. To enable synchronisation of the stream presentations by the destination devices, some or all of the devices carry respective tables identifying, for that device, an identifier for each type of data stream that the device can process together with the processing delay for that stream. The or each such table is accessible via the cluster connection to whichever of the devices, at source, destination or in between for the signal, is handling application of the necessary offsets.
Claims:
Claims (12)
1. A data processing system comprising a cluster of devices interconnected for the communication of data in streams wherein, for at least two data streams to be sent to one or more devices as destination devices of said cluster, at least one device of the cluster comprises means arranged to apply a respective delay to at least one of said at least two data streams in an amount determined by differing signal path latencies for said at least two streams; wherein at least some devices of the cluster maintain a respective table, readable via said interconnection by other devices of said cluster, each such table identifying one or more latencies for the respective device, and the means arranged to apply a delay operating to apply delays on the basis of table contents.
2. A system as claimed in claim 1, wherein each table identifies, for its respective device, signal processing capabilities for that device, together with the latency associated with each such capability.
3. A system as claimed in claim 1, wherein one of said devices is a source device for said at least two data streams to be sent to said destination devices of said cluster, said source device including said means arranged to apply a delay together with means arranged to read data from said respective tables of the destination devices and determine the respective delay to apply to at least one of said at least two data streams.
4. A system as claimed in claim 3, wherein said source device further comprises multiplexing means coupled with said means arranged to apply a delay and arranged to combine said at least two streams into a single data stream for transmission to said destination devices.
5. A system as claimed in claim 2, wherein one or more table entries is in the form of an algorithm requiring data from the device reading the table to enable determination of the latency of the device holding said table.
6. A system as claimed in claim 5, wherein the determination on the basis of the algorithm is implemented by the device reading the table, said device having downloaded the algorithm from the device holding the table.
7. A system as claimed in claim 5, wherein the determination on the basis of the algorithm is implemented by the device holding the table, the results of the implementation being transmitted via said interconnection to the device reading the table.
8. A system as claimed in claim 1, wherein all destination devices maintain a respective table.
9. A system as claimed in claim 1, wherein said means arranged to apply a delay comprises buffering means.
10. A system as claimed in claim 1, wherein said means arranged to apply a delay comprises means arranged to selectively apply a delay to reading of one or each of said data streams from a source thereof.
11. Data processing apparatus comprising the technical features of a source device in a system as claimed in claim 3.
12. Data processing apparatus as claimed in claim 11, further comprising the technical features of a destination device in a system as claimed in any of claims 1 to 10.
Similar technologies:
Publication No. | Publication Date | Patent Title
US7136399B2|2006-11-14|Latency handling for interconnected devices
US20010008535A1|2001-07-19|Interconnection of audio/video devices
US6954467B1|2005-10-11|Clustered networked devices
US5568403A|1996-10-22|Audio/video/data component system bus
JP4990762B2|2012-08-01|Maintaining synchronization between streaming audio and streaming video used for Internet protocols
KR100378718B1|2003-07-07|Method and apparatus for transmitting data packets
WO2001008366A9|2001-06-21|Apparatus and method for media access control
US20010015983A1|2001-08-23|Digital audio-video network system
US6026434A|2000-02-15|Data transmission processing system
US20060197880A1|2006-09-07|Signal processing device and stream processing method
US20030122964A1|2003-07-03|Synchronization network, system and method for synchronizing audio
MXPA01009349A|2002-06-05|Latency handling for interconnected devices
US20020188752A1|2002-12-12|Control messaging for an entertainment and communications network
JP4704651B2|2011-06-15|Method for transmitting stream, transmitter, and transmission system
JPH10313448A|1998-11-24|Moving image transmitter and receiver
JP4681723B2|2011-05-11|Playback apparatus, control method, and storage medium
WO2004010695A1|2004-01-29|Transmission system, transmission device, program thereof, and recording medium
Weihs2006|Convergence of real-time audio and video streaming technologies
JPH11205760A|1999-07-30|Multiplexing device
JP2003284033A|2003-10-03|Near video on-demand apparatus
JP2000307618A|2000-11-02|Data transmission system, transmission device and reception device, and method thereof
Patent family:
Publication No. | Publication Date
WO2001051340A3|2002-03-21|
US20070002886A1|2007-01-04|
ES2231307T3|2005-05-16|
WO2001051340A2|2001-07-19|
KR20010112331A|2001-12-20|
DE60015362T2|2005-11-03|
EP1208030B1|2004-10-27|
DE60015362D1|2004-12-02|
EP1208030A2|2002-05-29|
BR0008996A|2002-01-08|
PL350630A1|2003-01-27|
GB0000874D0|2000-03-08|
CN1192628C|2005-03-09|
JP2003520006A|2003-06-24|
US7136399B2|2006-11-14|
CN1367984A|2002-09-04|
Cited references:
Publication No. | Filing Date | Publication Date | Applicant | Patent Title
US5668601A|1994-02-17|1997-09-16|Sanyo Electric Co., Ltd.|Audio/video decoding system|
US5913031A|1994-12-02|1999-06-15|U.S. Philips Corporation|Encoder system level buffer management|
US5570372A|1995-11-08|1996-10-29|Siemens Rolm Communications Inc.|Multimedia communications with system-dependent adaptive delays|
US5784572A|1995-12-29|1998-07-21|Lsi Logic Corporation|Method and apparatus for compressing video and voice signals according to different standards|
US6148135A|1996-01-29|2000-11-14|Mitsubishi Denki Kabushiki Kaisha|Video and audio reproducing device and video decoding device|
US6163646A|1996-10-29|2000-12-19|Nec Corporation|Apparatus for a synchronized playback of audio-video signals|
US6275507B1|1997-09-26|2001-08-14|International Business Machines Corporation|Transport demultiplexor for an MPEG-2 compliant data stream|
US6356567B2|1997-09-26|2002-03-12|International Business Machines Corporation|Embedded clock recovery and difference filtering for an MPEG-2 compliant transport stream|
US6130987A|1997-10-02|2000-10-10|Nec Corporation|Audio-video synchronous playback apparatus|
US6349286B2|1998-09-03|2002-02-19|Siemens Information And Communications Network, Inc.|System and method for automatic synchronization for multimedia presentations|
US6665872B1|1999-01-06|2003-12-16|Sarnoff Corporation|Latency-based statistical multiplexing|
FR2848056A1|2002-11-28|2004-06-04|Canon Kk|Audiovisual domestic digital bus heterogeneous network destination node information synchronization having input node second synchronization packet set following first packet and inserting synchronization mark second packet|
WO2006008696A1|2004-07-15|2006-01-26|Koninklijke Philips Electronics N.V.|Measurement system for delay between two signals transmitted via two transmission paths|
US20060140265A1|2004-12-29|2006-06-29|Adimos Inc.|System circuit and method for transmitting media related data|
US20060168227A1|2004-11-24|2006-07-27|Nokia Corporation|System, method, device, module and computer code product for progressively downloading a content file|
US20060209210A1|2005-03-18|2006-09-21|Ati Technologies Inc.|Automatic audio and video synchronization|
US20070065112A1|2005-09-16|2007-03-22|Seiko Epson Corporation|Image and sound output system, image and sound data output device, and recording medium|
US20080252782A1|2005-08-26|2008-10-16|Junichi Komeno|Signal Source Device|
US20090073316A1|2005-04-28|2009-03-19|Naoki Ejima|Lip-sync correcting device and lip-sync correcting method|
US20090215538A1|2008-02-22|2009-08-27|Samuel Jew|Method for dynamically synchronizing computer network latency|
US20090290064A1|2008-05-23|2009-11-26|Yamaha Corporation|AV System|
US20100214480A1|2009-02-19|2010-08-26|Sanyo Electric Co., Ltd.|HDMI Device and Electronic Device|
US7936705B1|2007-08-16|2011-05-03|Avaya Inc.|Multiplexing VoIP streams for conferencing and selective playback of audio streams|
US20110179187A1|2010-01-20|2011-07-21|Fujitsu Limited|Storage apparatus, switch and storage apparatus control method|
US20130132525A1|2008-12-04|2013-05-23|Google Inc.|Adaptive playback with look-ahead|
US20130278826A1|2011-09-30|2013-10-24|Tondra J. Schlieski|System, methods, and computer program products for multi-stream audio/visual synchronization|
WO1990010993A1|1989-03-16|1990-09-20|Fujitsu Limited|Video/audio multiplex transmission system|
US5396497A|1993-02-26|1995-03-07|Sony Corporation|Synchronization of audio/video information|
US5430485A|1993-09-30|1995-07-04|Thomson Consumer Electronics, Inc.|Audio/video synchronization in a digital transmission system|
US5594660A|1994-09-30|1997-01-14|Cirrus Logic, Inc.|Programmable audio-video synchronization method and apparatus for multimedia systems|
US5751694A|1995-05-22|1998-05-12|Sony Corporation|Methods and apparatus for synchronizing temporally related data streams|
US5953049A|1996-08-02|1999-09-14|Lucent Technologies Inc.|Adaptive audio delay control for multimedia conferencing|
US5969750A|1996-09-04|1999-10-19|Winbond Electronics Corporation|Moving picture camera with universal serial bus interface|
US20020002039A1|1998-06-12|2002-01-03|Safi Qureshey|Network-enabled audio device|
GB0108044D0|2001-03-30|2001-05-23|British Telecomm|Application synchronisation|
WO2002079896A2|2001-03-30|2002-10-10|British Telecommunications Public Limited Company|Multi-modal interface|
KR100423129B1|2002-02-08|2004-03-16|주식회사 휴맥스|Method for controling data output timing in digital broadcasting receiver|
WO2003084173A1|2002-03-28|2003-10-09|British Telecommunications Public Limited Company|Synchronisation in multi-modal interfaces|
US7430223B2|2002-08-28|2008-09-30|Advanced Micro Devices, Inc.|Wireless interface|
KR100456636B1|2002-11-25|2004-11-10|한국전자통신연구원|Architecture of look-up service in jini-based home network supporting ieee 1394 and tcp/ip and method thereof|
US8028038B2|2004-05-05|2011-09-27|Dryden Enterprises, Llc|Obtaining a playlist based on user profile matching|
US8028323B2|2004-05-05|2011-09-27|Dryden Enterprises, Llc|Method and system for employing a first device to direct a networked audio device to obtain a media item|
KR100651894B1|2004-07-23|2006-12-06|엘지전자 주식회사|Display device and control method of the same|
EP1657929A1|2004-11-16|2006-05-17|Thomson Licensing|Device and method for synchronizing different parts of a digital service|
KR100652956B1|2005-01-14|2006-12-01|삼성전자주식회사|Method for informing video receiving delay and broadcast receving apparatus thereof|
EP1872533B1|2005-04-22|2019-05-22|Audinate Pty Limited|Network, device and method for transporting digital media|
EP2033361B1|2006-05-17|2015-10-07|Audinate Pty Limited|Transmitting and receiving media packet streams|
EP2165541B1|2007-05-11|2013-03-27|Audinate Pty Ltd|Systems, methods and computer-readable media for configuring receiver latency|
US8054382B2|2007-05-21|2011-11-08|International Business Machines Corporation|Apparatus, method and system for synchronizing a common broadcast signal among multiple television units|
US8270586B2|2007-06-26|2012-09-18|Microsoft Corporation|Determining conditions of conferences|
US8332898B2|2007-08-09|2012-12-11|Echostar Technologies L.L.C.|Apparatus, systems and methods to synchronize communication of content to a presentation device and a mobile device|
CN101118776B|2007-08-21|2012-09-05|中国科学院计算技术研究所|Method, system and device for realizing audio and video data synchronizing|
US9467735B2|2007-09-04|2016-10-11|Apple Inc.|Synchronizing digital audio and analog video from a portable media device|
JP4958748B2|2007-11-27|2012-06-20|キヤノン株式会社|Audio processing device, video processing device, and control method thereof|
US9497103B2|2008-02-29|2016-11-15|Audinate Pty Limited|Isochronous local media network for performing discovery|
US8913104B2|2011-05-24|2014-12-16|Bose Corporation|Audio synchronization for two dimensional and three dimensional video signals|
US10450193B2|2012-03-30|2019-10-22|Monsanto Technology Llc|Alcohol reformer for reforming alcohol to mixture of gas including hydrogen|
CN104794096A|2015-01-21|2015-07-22|李振华|Personal work system capable of being dynamically combined and adjusted|
CN106453306A|2016-10-08|2017-02-22|广东欧珀移动通信有限公司|Media data transmission synchronous method, device and system|
JP6956354B2|2018-08-30|2021-11-02|パナソニックIpマネジメント株式会社|Video signal output device, control method, and program|
Legal status:
2005-09-29| AS| Assignment|Owner name: U.S. PHILIPS CORPORATION, NEW YORK Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:LANIGAN, PETER J.;SHEPHERD, NICOLL B.;REEL/FRAME:016847/0464;SIGNING DATES FROM 20001107 TO 20001108 Owner name: U.S. PHILIPS CORPORATION, NEW YORK Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:LANIGAN, PETER J.;SHEPHERD, NICOLL B.;REEL/FRAME:016847/0422;SIGNING DATES FROM 20001107 TO 20001108 |
2006-06-20| AS| Assignment|Owner name: KONINKLIJKE PHILIPS ELECTRONICS, N.V., NETHERLANDS Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:U.S. PHILIPS CORPORATION;REEL/FRAME:017825/0919 Effective date: 20060619 |
2010-06-21| REMI| Maintenance fee reminder mailed|
2010-11-14| LAPS| Lapse for failure to pay maintenance fees|
2010-12-13| STCH| Information on status: patent discontinuation|Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362 |
2011-01-04| FP| Lapsed due to failure to pay maintenance fee|Effective date: 20101114 |
Priority:
Application No. | Filing Date | Patent Title
GB0000874.8||2000-01-14||
GBGB0000874.8A|2000-01-14|2000-01-14|Latency handling for interconnected devices|
US11/471,221|US20070002886A1|2000-01-14|2006-06-20|Latency handling for interconnected devices|